

Search for: All records

Creators/Authors contains: "Huang, Gaoping"

Note: Clicking on a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not yet be freely available during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract: Prototyping use cases for augmented reality (AR) applications helps elicit the functional requirements of features early on and drives subsequent development in a goal-oriented manner. Doing so requires designers to identify goal-oriented interactions and map the associations between those interactions in a spatio-temporal context. Because this mapping can yield multiple scenarios, and because the interaction components are embodied, recent AR prototyping methods lack adequate support for capturing and communicating the intent of designers and stakeholders during this process. We present ImpersonatAR, a mobile-device-based prototyping tool that uses embodied demonstrations in the augmented environment to support prototyping and evaluation of multi-scenario AR use cases. The approach: (1) captures events or steps in the form of embodied demonstrations using avatars and 3D animations, (2) organizes events and steps to compose a multi-scenario experience, and (3) lets stakeholders explore the scenarios through interactive role-play with the prototypes. We conducted a user study in which ten participants prototyped use cases from two different AR application features using ImpersonatAR. Results validated that ImpersonatAR promotes exploration and evaluation of diverse design possibilities for multi-scenario AR use cases through embodied representations of the different scenarios.
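The three-part approach above (captured events, scenarios composed from them, and stakeholder role-play) can be sketched as a minimal data model. This is an illustrative sketch only, not ImpersonatAR's actual implementation; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One captured step: an embodied demonstration (avatar + 3D animation)."""
    name: str
    animation: str  # hypothetical identifier of a recorded animation clip

@dataclass
class Scenario:
    """An ordered sequence of events forming one use-case path."""
    name: str
    events: list = field(default_factory=list)

@dataclass
class Prototype:
    """A multi-scenario AR use case that a stakeholder can role-play."""
    scenarios: dict = field(default_factory=dict)

    def add_scenario(self, scenario: Scenario) -> None:
        self.scenarios[scenario.name] = scenario

    def role_play(self, scenario_name: str) -> list:
        # Replay the chosen scenario's events in order (here, just their names).
        return [e.name for e in self.scenarios[scenario_name].events]

proto = Prototype()
happy = Scenario("happy-path", [Event("greet", "wave.anim"),
                                Event("point", "point.anim")])
proto.add_scenario(happy)
print(proto.role_play("happy-path"))  # ['greet', 'point']
```

Keeping scenarios as separate named paths over a shared pool of captured events is what lets one prototype represent the "multiple scenarios" the abstract describes.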
  2. Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible, self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e., machine tasks. Conventional in-person training is effective but requires the time and effort of an expert for each worker trained, so it does not scale. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, scale more efficiently. However, unlike in-person tutoring, existing recorded tutorials cannot adapt to workers' diverse experiences and learning behaviors. We present AdapTutAR, an adaptive task-tutoring system that enables experts to record machine-task tutorials via embodied demonstration and trains learners with AR tutoring content adapted to each user's characteristics. The adaptation is achieved by continually monitoring the learner's tutorial-following status and adjusting the tutoring content on the fly and in situ. Our user study demonstrated that the adaptive system is more effective than, and preferred over, the non-adaptive one.
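The adaptation loop the abstract describes (monitor the learner's tutorial-following status, then adjust the tutoring content on the fly) might look like the following sketch. The policy, thresholds, and content levels are invented for illustration and are not AdapTutAR's actual mechanism.

```python
def choose_detail_level(errors: int, time_on_step: float,
                        baseline_time: float) -> str:
    """Pick tutoring detail for the next step from observed learner status.

    Hypothetical policy: more errors or slower-than-baseline progress
    triggers richer guidance.
    """
    if errors >= 2 or time_on_step > 2 * baseline_time:
        return "full-demonstration"   # replay the expert's embodied demo
    if errors == 1 or time_on_step > baseline_time:
        return "highlight-hints"      # visual cues on the relevant machine part
    return "text-only"                # minimal reminder for proficient learners

# Monitoring loop sketch: after each step, re-select the content level.
log = [(0, 8.0), (1, 14.0), (3, 30.0)]  # (errors, seconds) observed per step
levels = [choose_detail_level(e, t, baseline_time=10.0) for e, t in log]
print(levels)  # ['text-only', 'highlight-hints', 'full-demonstration']
```

The key point is that selection happens per step, so the content can change mid-tutorial as the learner's status changes, rather than being fixed at the start.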
  3. Mobile robots and IoT (Internet of Things) devices can increase productivity, but only if they can be programmed by workers who understand the domain; this is especially true in manufacturing. Visual programming in the spatial context of the operating environment lets workers build mental models at a familiar level of abstraction. However, spatial-visual programming is still in its infancy: existing systems lack IoT integration and fundamental constructs, such as functions, that are essential for code reuse, encapsulation, and recursive algorithms. We present Vipo, a spatial-visual programming system for robot-IoT workflows. Vipo was designed with input from managers at six factories using mobile robots. Our user study (n=22) evaluated the efficiency, correctness, and comprehensibility of spatial-visual programming with functions.
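To see why a function construct matters for code reuse in such workflows, consider this minimal sketch of a workflow model where a named function of primitive robot steps can be reused at different spatial locations. The representation and all names are hypothetical, not Vipo's actual design.

```python
# Registry of named workflow functions (name -> list of primitive steps).
FUNCTIONS: dict = {}

def define(name: str, steps: list) -> None:
    """Register a reusable function of primitive robot/IoT steps."""
    FUNCTIONS[name] = steps

def call(name: str, location: str) -> list:
    """Expand a function into concrete steps bound to a spatial location."""
    return [f"{step}@{location}" for step in FUNCTIONS[name]]

# Define the pick-up routine once, then reuse it at two stations.
define("pickup", ["move_to", "open_gripper", "grasp"])
program = call("pickup", "stationA") + call("pickup", "stationB")
print(program)
# ['move_to@stationA', 'open_gripper@stationA', 'grasp@stationA',
#  'move_to@stationB', 'open_gripper@stationB', 'grasp@stationB']
```

Without functions, the three primitive steps would have to be duplicated at every station, which is exactly the encapsulation and reuse problem the abstract attributes to earlier spatial-visual systems.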